Yes. In this paper, we investigate strong lottery tickets in generative models: subnetworks that achieve good generative performance without any weight update. Neural network pruning is considered a main cornerstone of model compression for reducing the costs of computation and memory. Unfortunately, pruning of generative models has not been extensively explored, and existing pruning algorithms suffer from excessive weight-training costs, performance degradation, limited generalizability, or complicated training. To address these problems, we propose to find a strong lottery ticket via moment-matching scores. Our experimental results show that the discovered subnetwork performs similarly to or better than the trained dense model even when only 10% of the weights remain. To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm to find them stably. Our code and supplementary materials are publicly available.
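To make the "subnetwork without weight updates" idea concrete, below is a minimal PyTorch sketch of a score-based supermask layer in the edge-popup style that strong-lottery-ticket methods build on. The class name, sparsity level, and initialization are illustrative assumptions, and the paper's moment-matching scoring objective is not reproduced here; only the scores are trained, never the weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupermaskLinear(nn.Linear):
    """Linear layer whose weights stay frozen; only a per-weight score is learned.
    At forward time, the top (1 - sparsity) fraction of weights by score is kept."""
    def __init__(self, in_features, out_features, sparsity=0.9, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.scores = nn.Parameter(torch.randn_like(self.weight) * 0.01)
        self.weight.requires_grad_(False)   # weights are never updated
        self.sparsity = sparsity            # fraction of weights pruned away

    def forward(self, x):
        k = max(1, int((1 - self.sparsity) * self.scores.numel()))
        # threshold = k-th largest score
        threshold = torch.topk(self.scores.flatten(), k).values[-1]
        hard_mask = (self.scores >= threshold).float()
        # straight-through estimator: the forward pass uses the hard mask,
        # gradients flow to the scores as if the mask were the scores themselves
        mask = hard_mask.detach() + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask, self.bias)
```

In use, one would replace the linear/convolutional layers of a generator with such masked layers and optimize only the scores against a generative objective; an MMD-style moment-matching loss would be one hedged stand-in for the scoring criterion named in the abstract.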
Understanding the temporal dynamics of video is an essential aspect of learning better video representations. Recently, transformer-based architectural designs have been extensively explored for video tasks due to their ability to capture long-term dependencies in the input sequence. However, we find that these video transformers are still biased towards learning spatial dynamics rather than temporal ones, and debiasing this spurious correlation is critical to their performance. Based on this observation, we design simple yet effective self-supervised tasks for video models to better learn temporal dynamics. Specifically, to counter the spatial bias, our method learns the temporal order of video frames as extra self-supervision and enforces randomly shuffled frames to have low-confidence outputs. In addition, our method learns the temporal flow direction of video tokens across consecutive frames to strengthen the correlation with temporal dynamics. Across various video action recognition tasks, we demonstrate the effectiveness of our method and its compatibility with state-of-the-art video transformers.
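As one concrete illustration, here is a minimal sketch of the "shuffled frames should be low-confidence" signal described above, written in PyTorch. The function name and the KL-to-uniform formulation are assumptions of this sketch; the temporal flow-direction task is not shown.

```python
import torch
import torch.nn.functional as F

def shuffled_frame_debias_loss(model, clip, num_classes):
    """Penalize confident predictions on temporally shuffled clips.
    clip: (B, T, C, H, W); `model` is assumed to map a clip to class logits."""
    B, T = clip.shape[:2]
    perm = torch.randperm(T, device=clip.device)
    shuffled = clip[:, perm]                              # destroy temporal order, keep content
    log_probs = F.log_softmax(model(shuffled), dim=-1)    # (B, num_classes)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    # KL(uniform || p): pushes predictions on shuffled clips toward the uniform distribution
    return F.kl_div(log_probs, uniform, reduction="batchmean")
```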
Recent self-supervised video representation learning methods focus on maximizing the similarity between multiple augmented views from the same video and largely rely on the quality of the generated views. However, most existing methods lack a mechanism to prevent the learned representation from being biased towards static information in the video. In this paper, we propose frequency augmentation (FreqAug), a spatio-temporal data augmentation method in the frequency domain for video representation learning. FreqAug stochastically removes specific frequency components from the video so that the learned representation captures essential features from the remaining information for various downstream tasks. Specifically, FreqAug pushes the model to focus more on dynamic features rather than static features in the video by dropping spatial or temporal low-frequency components. To verify the generality of the proposed method, we experiment with FreqAug on multiple self-supervised learning frameworks along with standard augmentations. Transferring the improved representation to five video action recognition and two temporal action localization downstream tasks shows consistent improvements over baselines.
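A rough sketch of the core operation, dropping a low-frequency band either spatially or temporally via an FFT, is given below. The function name, the box-shaped band, and the drop ratio are assumptions of this sketch; the actual FreqAug is applied stochastically and its exact masking scheme may differ.

```python
import torch

def freq_drop_low(video, drop_ratio=0.25, temporal=False):
    """Zero out a band of low-frequency components along the spatial axes (H, W)
    or along the temporal axis. video: (C, T, H, W) float tensor."""
    dims = (1,) if temporal else (2, 3)
    spec = torch.fft.fftshift(torch.fft.fftn(video, dim=dims), dim=dims)  # DC at center
    idx = [slice(None)] * video.dim()
    for d in dims:
        n = video.shape[d]
        half = max(1, int(n * drop_ratio / 2))
        center = n // 2
        idx[d] = slice(center - half, center + half + 1)
    spec[tuple(idx)] = 0                                  # remove the low-frequency band
    spec = torch.fft.ifftshift(spec, dim=dims)
    return torch.fft.ifftn(spec, dim=dims).real
```

With `temporal=True` the static (slowly varying) content of the clip is suppressed, which is the mechanism the abstract credits for steering the representation toward dynamic features.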
Understanding document images (e.g., invoices) is an important research topic with many applications in document processing automation. With recent advances in deep-learning-based optical character recognition (OCR), current visual document understanding (VDU) systems are designed around OCR. Although such OCR-based approaches promise reasonable performance, they suffer from critical problems induced by OCR, such as (1) expensive computational costs and (2) performance degradation due to OCR error propagation. In this paper, we propose a novel VDU model that is end-to-end trainable without relying on an OCR framework. To this end, we propose a new pre-training task and a synthetic document image generator to mitigate the dependency on large-scale real document images. Our approach achieves state-of-the-art performance on various document understanding tasks on public benchmark datasets and private industrial service datasets. Through extensive experiments and analysis, we demonstrate the effectiveness of the proposed model, especially with real-world applications in mind.
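To illustrate what "end-to-end trainable without an OCR framework" means structurally, here is a bare-bones PyTorch sketch of an image encoder feeding an autoregressive text decoder that emits structured output tokens. The class name, layer sizes, and tokenizer are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class OCRFreeVDU(nn.Module):
    """Sketch: patch-embed the page, encode it, and decode structured tokens
    (e.g., a serialized JSON of extracted fields) conditioned on the image."""
    def __init__(self, vocab_size=32000, d_model=256, patch=16):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, token_ids):
        # image: (B, 3, H, W); token_ids: (B, L) previously generated output tokens
        mem = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, N, d_model)
        mem = self.encoder(mem)
        tgt = self.tok_embed(token_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        out = self.decoder(tgt, mem, tgt_mask=causal.to(image.device))
        return self.lm_head(out)                                   # next-token logits
```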
Utilizing the vicinal space between the source and target domains is one of the recent unsupervised domain adaptation approaches. However, the equilibrium collapse of labels, where source labels dominate target labels in the predictions of vicinal instances, has never been addressed. In this paper, we propose an instance-wise minimax strategy that minimizes the entropy of high-uncertainty instances in the vicinal space to tackle this problem. Through the solution of the minimax problem, we divide the vicinal space into two subspaces: a contrastive space and a consensus space. In the contrastive space, the inter-domain discrepancy is mitigated by constraining instances to have contrastive views and labels, while the consensus space reduces the confusion between intra-domain categories. The effectiveness of our method is demonstrated on public benchmarks, including Office-31, Office-Home, and VisDA-C, on which it achieves state-of-the-art performance. We further show that our method outperforms the current state-of-the-art on PACS, which indicates that our instance-wise approach also works well for multi-source domain adaptation.
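A simplified sketch of the instance-wise minimax idea follows: search over mixing ratios for the vicinal (mixed) instance with the highest predictive entropy, then minimize that entropy with respect to the model. The grid of ratios and the function names are assumptions of this sketch; the paper solves the inner maximization differently.

```python
import torch

def entropy(p, eps=1e-8):
    return -(p * (p + eps).log()).sum(dim=-1)

def instance_wise_minimax_loss(model, x_src, x_tgt, ratios=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """x_src, x_tgt: batches of paired source/target inputs with matching shapes."""
    ent_per_ratio = []
    for lam in ratios:
        x_mix = lam * x_src + (1 - lam) * x_tgt          # vicinal instance between domains
        p = model(x_mix).softmax(dim=-1)
        ent_per_ratio.append(entropy(p))                 # (B,)
    ent = torch.stack(ent_per_ratio, dim=0)              # (num_ratios, B)
    return ent.max(dim=0).values.mean()                  # minimize the worst-case entropy
```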
Transformers are transforming the landscape of computer vision, especially for recognition tasks. Detection transformers are the first fully end-to-end learning systems for object detection, while vision transformers are the first fully transformer-based architectures for image classification. In this paper, we integrate Vision and Detection Transformers (ViDT) to build an effective and efficient object detector. ViDT introduces a reconfigured attention module to extend the recent Swin Transformer into a standalone object detector, followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques to boost detection performance without much increase in computational load. Extensive evaluation on the Microsoft COCO benchmark dataset shows that ViDT obtains the best AP and latency trade-off among existing transformer-based object detectors and achieves 49.2 AP thanks to its high scalability for large models. We will release the code and trained models at https://github.com/naver-ai/vidt.
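ViDT's reconfigured attention module and multi-scale handling are beyond a short sketch; the snippet below only illustrates the generic query-based decoder stage that fully transformer-based detectors of this kind share: learned object queries cross-attend to backbone tokens and are mapped to class and box predictions. All names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QueryDecoderHead(nn.Module):
    """Generic sketch of a detection decoder over flattened backbone tokens."""
    def __init__(self, d_model=256, num_queries=100, num_classes=91):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.class_head = nn.Linear(d_model, num_classes + 1)   # +1 for "no object"
        self.box_head = nn.Linear(d_model, 4)                   # (cx, cy, w, h) in [0, 1]

    def forward(self, features):
        # features: (B, N, d_model) flattened patch tokens from the backbone
        q = self.queries.weight.unsqueeze(0).expand(features.size(0), -1, -1)
        hs = self.decoder(q, features)
        return self.class_head(hs), self.box_head(hs).sigmoid()
```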
Vision Transformer (ViT) extends the application range of transformers from language processing to computer vision tasks as an alternative architecture to existing convolutional neural networks (CNNs). Although the transformer-based architecture has been innovative for computer vision modeling, design conventions for an effective architecture have been studied less. Drawing on the successful design principles of CNNs, we investigate the role of spatial dimension conversion and its effectiveness on transformer-based architectures. We particularly attend to the dimension reduction principle of CNNs: as the depth increases, a conventional CNN increases the channel dimension and decreases the spatial dimensions. We empirically show that such spatial dimension reduction is beneficial to a transformer architecture as well, and propose a novel Pooling-based Vision Transformer (PiT) built upon the original ViT model. We show that PiT achieves improved model capability and generalization performance compared to ViT. Through extensive experiments, we further show that PiT outperforms the baseline on several tasks such as image classification, object detection, and robustness evaluation. Source code and ImageNet models are available at https://github.com/naver-ai/pit.
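The sketch below shows one way to realize the CNN-style "halve the spatial resolution, double the width" principle inside a token-based model: reshape patch tokens back into a 2D grid, apply a strided depthwise convolution, and flatten again. The class name and exact operator choices are assumptions and differ in detail from the actual PiT implementation.

```python
import torch
import torch.nn as nn

class TokenPooling(nn.Module):
    """Pooling stage for a ViT-like model: spatial tokens are downsampled 2x while
    the channel dimension is doubled; the class token is projected separately."""
    def __init__(self, dim):
        super().__init__()
        self.pool = nn.Conv2d(dim, dim * 2, kernel_size=3, stride=2, padding=1, groups=dim)
        self.cls_proj = nn.Linear(dim, dim * 2)

    def forward(self, cls_token, tokens, h, w):
        # cls_token: (B, 1, dim); tokens: (B, h*w, dim)
        B, _, dim = tokens.shape
        x = tokens.transpose(1, 2).reshape(B, dim, h, w)
        x = self.pool(x)                                   # (B, 2*dim, ~h/2, ~w/2)
        new_h, new_w = x.shape[2], x.shape[3]
        tokens = x.flatten(2).transpose(1, 2)              # (B, new_h*new_w, 2*dim)
        return self.cls_proj(cls_token), tokens, new_h, new_w
```

Inserting such a stage between groups of transformer blocks mirrors the stage-wise design of CNNs that the abstract argues transfers to transformers.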
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch.
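A condensed sketch of the cut-and-paste procedure described above is shown below, following the standard CutMix formulation; the helper name and Beta(alpha, alpha) sampling follow common practice and are stated here as assumptions of the sketch.

```python
import numpy as np
import torch

def cutmix(images, labels, alpha=1.0):
    """Cut a random box from a shuffled copy of the batch, paste it onto the originals,
    and mix the labels in proportion to the pasted area. images: (B, C, H, W)."""
    images = images.clone()
    B, _, H, W = images.shape
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(B, device=images.device)

    # sample a box whose area is roughly (1 - lam) of the image
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(H * cut_ratio), int(W * cut_ratio)
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - cut_h // 2, 0, H), np.clip(cy + cut_h // 2, 0, H)
    x1, x2 = np.clip(cx - cut_w // 2, 0, W), np.clip(cx + cut_w // 2, 0, W)

    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    # adjust lambda to the exact pasted area after clipping
    lam = 1.0 - ((y2 - y1) * (x2 - x1) / (H * W))
    return images, labels, labels[perm], lam
```

Training then uses the mixed targets, e.g. `loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)`.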
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
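To illustrate finding 1), here is a small sketch of relation-based distillation: instead of matching tokens directly, the student matches the teacher's token-to-token similarity map. TinyMIM distills Q-K/V-V relations from attention; the plain feature-similarity stand-in, the function name, and the temperature below are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def token_relation_distill(student_tokens, teacher_tokens, tau=1.0):
    """Soft cross-entropy between student and teacher token-relation maps.
    student_tokens, teacher_tokens: (B, N, d) token features from matched layers."""
    def relations(t):
        t = F.normalize(t, dim=-1)
        return torch.matmul(t, t.transpose(1, 2)) / tau   # (B, N, N) similarity map
    s_rel = F.log_softmax(relations(student_tokens), dim=-1)
    t_rel = F.softmax(relations(teacher_tokens), dim=-1)
    return F.kl_div(s_rel, t_rel, reduction="batchmean")
```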
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
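As a loose sketch of the mask-guided, feature-level enhancement idea (not the paper's actual dynamic weighting module), the snippet below pools support features inside the ground-truth masks to obtain class centers and uses them to gate query features channel-wise; all names and the gating scheme are assumptions.

```python
import torch

def mask_weighted_class_centers(support_feats, support_masks):
    """support_feats: (K, C, H, W); support_masks: (K, 1, H, W) binary masks.
    Returns one pooled center per support example, restricted to the masked region."""
    masked = support_feats * support_masks
    area = support_masks.sum(dim=(2, 3)).clamp(min=1e-6)          # (K, 1)
    return masked.sum(dim=(2, 3)) / area                          # (K, C)

def reweight_query(query_feats, centers):
    """query_feats: (B, C, H, W); centers: (K, C). Channel-wise gating from the
    averaged class centers, a simplified stand-in for dynamic re-weighting."""
    gate = torch.sigmoid(centers.mean(dim=0))                     # (C,)
    return query_feats * gate.view(1, -1, 1, 1)
```

The instance-level step described in the abstract (linking support and query object queries via cross-attention) would operate on top of such enhanced features.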